59 research outputs found

    Prediction in Photovoltaic Power by Neural Networks

    The ability to forecast the power produced by renewable energy plants in the short and medium term is a key requirement for enabling a high level of penetration of distributed generation into the grid infrastructure. Forecasting energy production is mandatory for dispatching and distribution, at the transmission system operator level as well as at the electrical distributor and power system operator levels. In this paper, we present three techniques based on neural and fuzzy neural networks, namely the radial basis function network, the adaptive neuro-fuzzy inference system and the higher-order neuro-fuzzy inference system, which are well suited to predicting data sequences stemming from real-world applications. The preliminary results concerning the prediction of the power generated by a large-scale photovoltaic plant in Italy confirm the reliability and accuracy of the proposed approaches.
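    Of the three predictors above, the radial basis function network is the simplest to illustrate. The following is a minimal, hypothetical sketch of RBF regression (Gaussian kernels centred on the training points, ridge-solved weights); it is not the paper's implementation, and the data, function names and parameter values are invented for illustration.

```python
import numpy as np

def rbf_predict(X_train, y_train, X_test, gamma=30.0, lam=1e-8):
    """Illustrative RBF regressor: Gaussian kernels on training points,
    weights obtained by a ridge-regularized linear solve."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K = kernel(X_train, X_train)
    w = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
    return kernel(X_test, X_train) @ w

# Toy 1-D stand-in for a power curve (not real plant data)
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
pred = rbf_predict(X, y, X)
```

    In the forecasting setting, `X_train` would hold lagged power and weather features rather than a synthetic sine wave.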

    Deep Neural Networks for Multivariate Prediction of Photovoltaic Power Time Series

    The large-scale penetration of renewable energy sources is forcing the transition towards future electricity networks modeled on the smart grid paradigm, where energy clusters call for new methodologies for the dynamic energy management of distributed energy resources, fostering new partnerships and overcoming integration barriers. The prediction of the energy production of renewable sources, in particular photovoltaic plants, which are highly intermittent, is a fundamental tool in the modern management of electrical grids as they shift from reactive to proactive operation, aided by advanced monitoring systems, data analytics and advanced demand-side management programs. The gradual move towards a smart grid environment impacts not only the operating control and management of the grid, but also the electricity market. The focus of this article is on advanced methods for predicting photovoltaic energy output that prove, through their accuracy and robustness, to be useful tools for efficient system management, even at the prosumer level, and for improving the resilience of smart grids. Four different deep neural models for the multivariate prediction of energy time series are proposed; all of them are based on the Long Short-Term Memory network, a type of recurrent neural network able to deal with long-term dependencies. Two of these models also use Convolutional Neural Networks to obtain higher levels of abstraction, since they allow combining and filtering different time series while considering all the available information. The proposed models are applied to real-world energy problems to assess their performance and are compared against the classic univariate approach used as a reference benchmark. The significance of this work is to show that, once trained, the proposed deep neural networks remain applicable in real online scenarios characterized by high data variability, without requiring retraining or end-user tricks.
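    The multivariate setting described above amounts to feeding the network sliding windows that stack several aligned series. A minimal numpy sketch of that window construction follows; the helper name, lookback length and three-series example are assumptions for illustration, not the paper's code.

```python
import numpy as np

def make_windows(series, lookback):
    """Build (samples, lookback, n_series) input windows and, as targets,
    the next value of the first series (e.g. PV power)."""
    X, y = [], []
    for t in range(len(series) - lookback):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback, 0])
    return np.stack(X), np.array(y)

# Three hypothetical aligned series: PV power, irradiance, temperature
data = np.random.rand(100, 3)
X, y = make_windows(data, lookback=24)
```

    Each `X[i]` is then a 2-D (time by series) sample, which is what allows the convolutional front-end of the hybrid models to filter across series as well as across time.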

    On Effects of Compression with Hyperdimensional Computing in Distributed Randomized Neural Networks

    A change in the prevalent supervised learning techniques is foreseeable in the near future: from complex, computationally expensive algorithms to more flexible and elementary training schemes. The strong revitalization of randomized algorithms can be framed within this shift. We recently proposed a model for distributed classification based on randomized neural networks and hyperdimensional computing, which takes into account the cost of information exchange between agents by using compression. Compression is important as it addresses the communication bottleneck; however, the original approach is rigid in the way compression is used. Therefore, in this work, we propose a more flexible approach to compression and compare it to conventional compression algorithms, dimensionality reduction, and quantization techniques.
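    The core hyperdimensional operation behind such compression is superposition (bundling): several vectors, each bound to a random key hypervector, are summed into a single vector of the same size, which can additionally be quantized to one bit per component before exchange. A toy sketch, with invented dimensions and no connection to the authors' actual protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # hypervector dimensionality (illustrative)

# One random bipolar key per agent; values stand in for local model payloads
keys = rng.choice([-1, 1], size=(5, D))
values = rng.standard_normal((5, D))

# Binding (elementwise product) then bundling (sum): 5 vectors -> 1 vector
bundle = (keys * values).sum(axis=0)

# Aggressive 1-bit quantization of the bundle before transmission
compressed = np.sign(bundle)

# Approximate decoding: unbinding with a key recovers that value's
# sign pattern well above chance, despite superposition and quantization
accuracy = np.mean(np.sign(keys[2] * compressed) == np.sign(values[2]))
```

    The recovered signs are noisy because the other bundled vectors act as crosstalk, which is exactly the accuracy-versus-bandwidth trade-off that compression choices in this line of work navigate.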

    Multi-damage detection in composite space structures via deep learning

    The diagnostics of environmentally induced damage in composite structures plays a critical role in ensuring the operational safety of space platforms. Recently, spacecraft have been equipped with lightweight and very large substructures, such as antennas and solar panels, to meet the performance demands of modern payloads and scientific instruments. Due to their large surface, these components are more susceptible to impacts from orbital debris than other satellite locations. However, the detection of debris-induced damage still proves challenging in large structures, owing to minimal alterations in the spacecraft's global dynamics, and calls for advanced structural health monitoring solutions. To address this issue, a data-driven methodology using Long Short-Term Memory (LSTM) networks is applied here to the case of damaged solar arrays. Finite element models of the solar panels are used to reproduce damage locations, which are selected based on the most critical risk areas in the structures. The modal parameters of the healthy and damaged arrays are extracted to build the governing equations of the flexible spacecraft. Standard attitude manoeuvres are simulated to generate two datasets, one including local accelerations and the other consisting of piezoelectric voltages, both measured at specific locations of the structure. The LSTM architecture is then trained by associating each sensed time series with the corresponding damage label. The performance of the deep learning approach is assessed, and a comparison is presented between the accuracy of the two distinct sets of sensors: accelerometers and piezoelectric patches. In both cases, the framework proved effective in promptly identifying the location of damaged elements within limited measured time samples.
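    The underlying idea, classifying a damage location from how it alters the sensed dynamics, can be sketched with a toy stand-in. Here each hypothetical damage case shifts a dominant modal frequency of a simulated acceleration signal, and a simple spectral-peak classifier replaces the LSTM; the class names, frequencies and noise level are all invented, not the paper's finite element results.

```python
import numpy as np

rng = np.random.default_rng(1)
classes = ["healthy", "damage_root", "damage_tip"]       # hypothetical labels
modal_freq = {"healthy": 5.0, "damage_root": 4.6, "damage_tip": 4.2}
fs, n = 64, 256                                          # 4 s at 64 Hz

def simulate_response(cls):
    # Toy stand-in for the simulated accelerations: each damage case
    # shifts one dominant modal frequency (an assumption for illustration)
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * modal_freq[cls] * t) + 0.05 * rng.standard_normal(n)

def classify(x):
    # Assign the class whose modal frequency is nearest the strongest peak
    freqs = np.fft.rfftfreq(n, 1 / fs)
    peak = freqs[np.abs(np.fft.rfft(x)).argmax()]
    return min(classes, key=lambda c: abs(modal_freq[c] - peak))

labels = [c for c in classes for _ in range(5)]
predictions = [classify(simulate_response(c)) for c in classes for _ in range(5)]
```

    An LSTM earns its keep precisely where such hand-crafted spectral features break down, i.e. when the damage signature is a subtle temporal pattern rather than a clean frequency shift.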

    2-D convolutional deep neural network for the multivariate prediction of photovoltaic time series

    Here, we propose a new deep learning scheme for the energy time series prediction problem. The model is based on Long Short-Term Memory networks and Convolutional Neural Networks. These techniques are combined so that interdependencies among several different time series can be exploited for forecasting purposes by filtering and joining their samples. The resulting learning scheme is a superposition of network layers, yielding a stacked deep neural architecture. We demonstrate the accuracy and robustness of the proposed approach by testing it on real-world energy problems.
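    The "filtering and joining" step can be pictured as a multichannel 1-D convolution over the stacked series: each learned filter spans a short time window across all input channels and emits one feature sequence for the downstream LSTM. A numpy sketch under invented shapes (three series, filters of width 5, eight filters), not the paper's architecture:

```python
import numpy as np

def conv1d_multichannel(x, w):
    """x: (time, channels); w: (width, channels, filters).
    Valid convolution that mixes all channels within each time window."""
    T, C = x.shape
    k, _, F = w.shape
    out = np.zeros((T - k + 1, F))
    for t in range(T - k + 1):
        # Contract both the time-window and channel axes against the filters
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
    return out

x = np.random.rand(48, 3)      # three aligned energy time series
w = np.random.rand(5, 3, 8)    # 8 hypothetical learned filters of width 5
h = conv1d_multichannel(x, w)  # feature sequence fed to the recurrent stage
```

    Stacking such convolutional layers before the LSTM is what turns the 2-D (time by series) input into progressively more abstract joint features.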

    Lung ultrasound in systemic sclerosis: correlation with high-resolution computed tomography, pulmonary function tests and clinical variables of disease

    Interstitial lung disease (ILD) is a hallmark of systemic sclerosis (SSc). Although high-resolution computed tomography (HRCT) is the gold standard for diagnosing ILD, lung ultrasound (LUS) has recently emerged as a promising noninvasive, radiation-free technique for ILD evaluation in SSc patients. The aim of this study was to evaluate whether a correlation exists between LUS, chest HRCT, pulmonary function test findings and clinical variables of the disease. Thirty-nine patients (33 women and 6 men; mean age 51 ± 15.2 years) underwent clinical examination, HRCT, pulmonary function tests and LUS for detection of B-lines. A positive correlation exists between the number of B-lines and the HRCT score (r = 0.81, p < 0.0001), whereas a negative correlation exists between the number of B-lines and diffusing capacity of the lung for carbon monoxide (DLCO) (r = −0.63, p < 0.0001). The number of B-lines increases along with the progression of the capillaroscopic damage. A statistically significant difference in the number of B-lines was found between patients with and without digital ulcers [42 (3–84) vs 16 (4–55)]. We found that the number of B-lines increased with the progression of both the HRCT score and digital vascular damage. LUS may therefore be a useful tool for determining the best timing for HRCT execution, thus sparing many patients continuous and unnecessary exposure to ionizing radiation.

    Perceptron theory can predict the accuracy of neural networks

    Multilayer neural networks set the current state of the art for many technical classification problems. However, these networks remain essentially black boxes when it comes to analyzing them and predicting their performance. Here, we develop a statistical theory for the one-layer perceptron and show that it can predict the performance of a surprisingly large variety of neural networks with different architectures. A general theory of classification with perceptrons is developed by generalizing an existing theory for analyzing reservoir computing models and connectionist models for symbolic reasoning known as vector symbolic architectures. Our statistical theory offers three formulas leveraging the signal statistics with increasing detail. The formulas are analytically intractable, but can be evaluated numerically. The description level that captures maximum detail requires stochastic sampling methods. Depending on the network model, the simpler formulas already yield high prediction accuracy. The quality of the theory's predictions is assessed in three experimental settings: a memorization task for echo state networks (ESNs) from the reservoir computing literature, a collection of classification datasets for shallow randomly connected networks, and the ImageNet dataset for deep convolutional neural networks. We find that the second description level of the perceptron theory can predict the performance of types of ESNs that could not be described previously. Furthermore, the theory can predict the performance of deep multilayer neural networks when applied to their output layer. While other methods for predicting neural network performance commonly require training an estimator model, the proposed theory requires only the first two moments of the distribution of the postsynaptic sums in the output neurons. Moreover, the perceptron theory compares favorably to other methods that do not rely on training an estimator model.
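    To make the "first two moments of the postsynaptic sums" idea concrete, here is a deliberately simplified two-class sketch: if the correct output neuron's sum and its competitor's sum are modeled as independent Gaussians, the predicted accuracy is the probability that the correct sum comes out larger. This is our own Gaussian simplification for illustration, not one of the paper's three formulas.

```python
import math

def predicted_accuracy(mu_c, var_c, mu_o, var_o):
    """Probability that a Gaussian 'correct' postsynaptic sum N(mu_c, var_c)
    exceeds an independent 'other' sum N(mu_o, var_o)."""
    z = (mu_c - mu_o) / math.sqrt(var_c + var_o)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical moments measured at the output layer
acc = predicted_accuracy(mu_c=1.0, var_c=0.25, mu_o=0.0, var_o=0.25)
```

    The appeal of such moment-based formulas is that the moments can be measured cheaply on a trained network, with no estimator model to fit.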

    A review of the enabling methodologies for knowledge discovery from smart grids data

    The large-scale deployment of pervasive sensors and decentralized computing in modern smart grids is expected to exponentially increase the volume of data exchanged by power system applications. In this context, the search for scalable and flexible methodologies aimed at supporting rapid decisions in a data-rich, but information-limited, environment is a relevant issue to address. To this aim, this paper investigates the role of Knowledge Discovery from massive Datasets in smart grid computing, exploring its various application fields by considering the data available to power system stakeholders and their knowledge extraction needs. The aim of this paper is twofold. In the first part, the authors summarize the most recent activities developed in this field by the Task Force on "Enabling Paradigms for High-Performance Computing in Wide Area Monitoring Protective and Control Systems" of the IEEE PSOPE Technologies and Innovation Subcommittee. In the second part, the authors propose the development of a data-driven forecasting methodology, modeled on the fundamental principles of the Knowledge Discovery Process data workflow. The described methodology is then applied to solve the load forecasting problem for a complex user case, in order to emphasize the potential role of knowledge discovery in supporting post-processing analysis in data-rich environments, as feedback for improving forecasting performance.
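    Any data-driven load forecasting workflow of this kind needs a baseline against which post-processing feedback can be measured. A common, minimal choice is seasonal persistence (forecast each hour with the same hour of the previous day) scored by MAPE; the synthetic daily-periodic load below is invented for illustration and is unrelated to the paper's user case.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three weeks of hourly load with a daily cycle plus noise (synthetic)
hours = np.arange(21 * 24)
load = 100.0 + 20.0 * np.sin(2 * np.pi * hours / 24) + rng.normal(0.0, 2.0, hours.size)

# Seasonal-persistence baseline: y_hat(t) = y(t - 24 h)
forecast = load[:-24]
actual = load[24:]

# Mean absolute percentage error, the usual post-processing feedback metric
mape = float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)
```

    In a knowledge discovery loop, such error diagnostics computed in post-processing are exactly the feedback used to refine feature selection and model choice in the next iteration.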

    QUANTUM RANDOM VECTOR FUNCTIONAL-LINK NEURAL NETWORKS

    No full text